A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
We show how to learn a neural topic model with discrete random variables---one that explicitly models each word's assigned topic---using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables. The model we utilize combines the expressive power of neural methods for representing sequences of text with the topic model's ability to capture global, thematic coherence. Using neural variational inference, we show improved perplexity and document understanding across multiple corpora. We examine the effect of prior hyperparameters on both the model and the variational parameters, and demonstrate how our approach can compete with and surpass a popular topic model implementation on an automatic measure of topic quality.
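The key point in the title---avoiding the reparametrization trick---rests on a general fact about discrete latent variables: when a topic assignment z takes one of K values, the expectation in the ELBO is a finite sum, so it (and its gradient) can be computed exactly with no Gumbel-softmax relaxation or score-function estimator. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation; the function names and the toy numbers are ours.

```python
import numpy as np

def expected_log_lik(log_p_w_given_z, q_z):
    """E_q[log p(w | z)] for a discrete z with K values.

    Because z is discrete, this expectation is an exact finite sum:
    gradients flow through q_z and log_p_w_given_z directly, with no
    sampling and hence no reparametrization trick.
    """
    # log_p_w_given_z: (K,) log-likelihood of the word under each topic
    # q_z: (K,) variational posterior over the word's topic assignment
    return np.dot(q_z, log_p_w_given_z)

def kl_categorical(q_z, prior):
    """KL(q(z) || p(z)) between two categorical distributions, also closed form."""
    return np.sum(q_z * (np.log(q_z) - np.log(prior)))

# Toy example: K = 4 topics, uniform prior over topics.
K = 4
q_z = np.array([0.7, 0.1, 0.1, 0.1])          # peaked variational posterior
prior = np.full(K, 1.0 / K)
log_p = np.log(np.array([0.05, 0.01, 0.02, 0.01]))

# Per-word ELBO contribution: expected log-likelihood minus KL penalty.
elbo = expected_log_lik(log_p, q_z) - kl_categorical(q_z, prior)
```

In a neural model, `q_z` would be the output of an inference network and the whole expression would be differentiated end to end; the point is that the discrete expectation itself never needs to be approximated by sampling.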
Review for NeurIPS paper: A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
Summary and Contributions: In this paper, the authors use neural variational inference to construct a neural topic model with discrete random variables, and propose one model, namely VRTM, which combines RNNs and topic models.
Strengths: 1. The exploration of combining RNNs and topic models is interesting and significant: it lets topic models handle sequential text and capture more textual information than the bag-of-words representation prevalent in LDA-based topic models. Specifically, for thematic words, VRTM uses both the RNN and topic-model predictions to generate the next word; for syntactic words, only the output of the RNN is used to predict the next word. In particular, during the generative process, a discrete topic assignment is attached to each thematic word, which benefits interpretability. To be specific, the authors first design a reasonable generative model that applies different strategies for generating thematic and syntactic words with different inputs, i.e., a mixture of LDA and RNN predictions or just the output of the RNN.
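The two generation paths the review describes---mixing topic-model and RNN signals for thematic words, RNN alone for syntactic words---can be sketched as follows. This is a hypothetical illustration of the mechanism, not VRTM's actual architecture; the function names, the additive mixing of logits, and the toy vocabulary size are all our assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def next_word_logits(rnn_logits, topic_logits, is_thematic):
    """Hypothetical sketch of the two generation paths.

    Thematic (content) words: combine RNN and topic-model scores.
    Syntactic (function) words: use the RNN output alone.
    """
    if is_thematic:
        return rnn_logits + topic_logits  # mixture of both signals
    return rnn_logits                     # RNN alone

# Toy example over a vocabulary of 5 words.
V = 5
rng = np.random.default_rng(0)
rnn_logits = rng.normal(size=V)    # stand-in for an RNN hidden-state projection
topic_logits = rng.normal(size=V)  # stand-in for topic-word scores

p_thematic = softmax(next_word_logits(rnn_logits, topic_logits, True))
p_syntactic = softmax(next_word_logits(rnn_logits, topic_logits, False))
```

The design point the review highlights is that the topic signal only enters for thematic words, so global thematic coherence is injected exactly where it matters while function words remain purely sequence-modeled.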
Meta-review for NeurIPS paper: A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
Reviews are all on the accept side: one "top 50% of accepted papers" and three "marginally above the acceptance threshold." Only R4 (strong accept) intervened in the discussion. As the main reason for calling this paper borderline was limited novelty compared to [7], I proceeded to a detailed comparative rereading of this paper against [7]. In my opinion, this approach is very different from [7]. While the authors presented it as introducing only a small modeling difference from [7], this difference has a huge impact on everything, in particular the resulting DNN architecture and the inference process.